Results 1 - 7 of 7
1.
ACM Transactions on Asian and Low-Resource Language Information Processing ; 21(5), 2022.
Article in English | Scopus | ID: covidwho-2299916

ABSTRACT

Emotions, the building blocks of the human intellect, play a vital role in Artificial Intelligence (AI). For a robust AI-based machine, it is important that the machine understands human emotions. COVID-19 has introduced the world to no-touch intelligent systems. With an influx of users, it is critical to create devices that can communicate in the local dialect. A multilingual system is required in countries like India, which has a large population and a diverse range of languages. Given the importance of multilingual emotion recognition, this research introduces BERIS, an Indian-language emotion detection system. From an Indian sound recording, BERIS estimates both acoustic and textual characteristics. To extract the textual features, we use Multilingual Bidirectional Encoder Representations from Transformers. For acoustics, BERIS computes Mel-Frequency Cepstral Coefficients, Linear Prediction Coefficients, and pitch. The extracted features are merged into a linear array. Since the dialogues are of varied lengths, the data are normalized so that all arrays have equal length. Finally, we split the data into training and validation sets to construct a predictive model that can predict emotions from new input. On all the datasets presented, quantitative and qualitative evaluations show that the proposed algorithm outperforms state-of-the-art approaches. © 2022 Association for Computing Machinery.
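A minimal sketch of the feature-fusion step the abstract describes, assuming a public multilingual BERT checkpoint and standard librosa routines for MFCC, LPC, and pitch; the model name, frame parameters, and padding length are illustrative assumptions, not details from the paper:

```python
# Sketch of text + acoustic feature fusion as described in the BERIS abstract.
import numpy as np
import librosa
import torch
from transformers import AutoTokenizer, AutoModel

TOKENIZER = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
ENCODER = AutoModel.from_pretrained("bert-base-multilingual-cased")

def text_features(transcript: str) -> np.ndarray:
    """Mean-pooled multilingual BERT embedding of the utterance transcript."""
    inputs = TOKENIZER(transcript, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = ENCODER(**inputs).last_hidden_state  # (1, tokens, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

def acoustic_features(wav_path: str) -> np.ndarray:
    """MFCCs, LPC coefficients, and a pitch track, flattened into one vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)  # 13 values
    lpc = librosa.lpc(y, order=12)                                   # 13 values
    pitch = librosa.yin(y, fmin=65, fmax=400, sr=sr)                 # per-frame f0
    return np.concatenate([mfcc, lpc, pitch])

def fused_vector(wav_path: str, transcript: str, target_len: int = 2048) -> np.ndarray:
    """Merge text and acoustic features, then pad or trim to a fixed length,
    mirroring the equal-length normalization mentioned in the abstract."""
    merged = np.concatenate([text_features(transcript), acoustic_features(wav_path)])
    padded = np.zeros(target_len, dtype=np.float32)
    padded[: min(target_len, merged.size)] = merged[:target_len]
    return padded
```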

2.
14th IEEE International Conference on Computational Intelligence and Communication Networks, CICN 2022 ; : 90-95, 2022.
Article in English | Scopus | ID: covidwho-2228461

ABSTRACT

Understanding facial expressions is important for interactions among humans, as they convey a lot about a person's identity and emotions. Research in human emotion recognition has become more popular due to advances in machine learning and deep learning techniques. However, the spread of COVID-19 and the need to wear masks in public have impacted the performance of current emotion recognition models. Therefore, improving the performance of these models requires datasets with masked faces. In this paper, we propose a model to generate realistic face masks using generative adversarial network models, in particular image inpainting. The MAFA dataset was used to train the generative image inpainting model. In addition, a face detection model was proposed to identify the mask area. The model was evaluated using the MAFA and CelebA datasets, and promising results were obtained. © 2022 IEEE.
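A rough illustration of the two stages the abstract outlines, namely locating the mask region on a detected face and handing that region to an inpainting step. Classical OpenCV inpainting stands in here for the paper's GAN-based inpainting model, and the cascade file and lower-face heuristic are assumptions for illustration only:

```python
# Locate a lower-face "mask" region, then fill it with an inpainting call.
import cv2
import numpy as np

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def lower_face_mask(image: np.ndarray) -> np.ndarray:
    """Binary mask covering the lower half of each detected face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    for (x, y, w, h) in FACE_CASCADE.detectMultiScale(gray, 1.1, 5):
        mask[y + h // 2 : y + h, x : x + w] = 255  # rough mask region
    return mask

def synthesize_masked_face(image: np.ndarray) -> np.ndarray:
    """Fill the mask region; a trained GAN inpainting model would replace this call."""
    region = lower_face_mask(image)
    return cv2.inpaint(image, region, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```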

3.
5th International Conference on Communication, Device and Networking, ICCDN 2021 ; 902:401-412, 2023.
Article in English | Scopus | ID: covidwho-2048170

ABSTRACT

The COVID-19 pandemic has produced a significant impact on society. Apart from its deadly attack on human health and the economy, it has also affected the mental stability of human beings at a large scale. Though vaccination has been partially successful in preventing further spread of the virus, the disease leaves behind typical health-related complications even in those who survive it. This research work mainly focuses on human emotion prediction analysis in the post-COVID-19 period. In this work, a considerable amount of data has been collected from various digital sources, viz. Facebook, e-newspapers, and digital news houses. Three distinct classes of emotion, i.e., analytical, depressed, and angry, have been considered. Finally, the predictive analysis is performed using four deep learning models, viz. CNN, RNN, LSTM, and Bi-LSTM, based on digital media responses. A maximum accuracy of 97% is obtained from the LSTM model. It is observed that the post-COVID-19 crisis has left people predominantly depressed. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
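A minimal sketch of the LSTM variant among the four classifiers compared above, for the three emotion classes (analytical, depressed, angry); the vocabulary size, sequence length, and layer widths are illustrative assumptions, not values from the paper:

```python
# Three-class emotion classifier over tokenized digital-media text.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3   # analytical, depressed, angry
VOCAB_SIZE = 20000
MAX_LEN = 200

def build_lstm_classifier() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, 128),
        layers.LSTM(64),  # wrap in layers.Bidirectional(...) for the Bi-LSTM variant
        layers.Dense(32, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```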

4.
Human Computer Interaction thematic area of the 24th International Conference on Human-Computer Interaction, HCII 2022 ; 13303 LNCS:329-339, 2022.
Article in English | Scopus | ID: covidwho-1919628

ABSTRACT

Emotion recognition based on facial expressions is an increasingly important area in Human-Computer Interaction research. Despite the many challenges of computer-based facial emotion recognition, e.g., the huge variability of human facial features, cultural differences, and the differentiation between primary and secondary emotions, there are more and more systems and approaches focusing on facial emotion recognition. These technologies already offer many possibilities to automatically recognize human emotions. As part of the research project described in this paper, these technologies are used to investigate whether and how they can support virtual human interactions. More and more meetings are taking place virtually due to the COVID-19 pandemic and advancing digitalization. Therefore, the faces of the attendees are often the only visible cue to their emotional states. This paper focuses on outlining why emotions and their recognition are important and in which areas the use of automated emotion detection tools seems promising. We do so by showing potential use cases for visual emotion recognition in the professional environment. In a nutshell, the research project aims to investigate whether facial emotion recognition software can help to improve self-reflection on emotions and the quality and experience of virtual human interactions. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

5.
7th International Conference on e-Society, e-Learning and e-Technologies, ICSLT 2021 ; : 105-110, 2021.
Article in English | Scopus | ID: covidwho-1604068

ABSTRACT

Human emotions and sentiments are dynamic by nature. Nowadays, social networks have become a key resource for human communication and a faithful representation of this dynamism. This fact poses major challenges to systems addressing sentiment analysis, so having systems capable of inferring this dynamism has become a key issue. In this paper, we introduce Emoweb 2.0, a prototype for dynamic sentiment analysis of Twitter data. A well-known lexicon is taken as the starting basis, and new words are appended by an unsupervised learning algorithm that governs the process. Sentiment values of new words are calculated and dynamically updated depending on the trends detected. Tweet sentiment scores are also computed during the process. A visualization module is included to observe word sentiment fluctuations over time. The experiment performed is based on the ongoing COVID-19 pandemic and shows promising results. © 2021 ACM.
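One plausible reading of the dynamic-lexicon idea sketched in the abstract: tweets are scored against a seed lexicon, and unseen words gradually acquire a sentiment value from the tweets in which they occur. The seed values and the running-average update rule below are illustrative assumptions, not the paper's exact algorithm:

```python
# Score tweets from a seed lexicon and let new words inherit sentiment over time.
from collections import defaultdict

seed_lexicon = {"good": 1.0, "great": 1.0, "bad": -1.0, "terrible": -1.0}
learned = defaultdict(lambda: {"value": 0.0, "count": 0})

def tweet_score(tokens):
    """Average sentiment over tokens that already carry a lexicon value."""
    known = [seed_lexicon[t] if t in seed_lexicon else learned[t]["value"]
             for t in tokens if t in seed_lexicon or t in learned]
    return sum(known) / len(known) if known else 0.0

def update_lexicon(tokens):
    """Push the tweet's score onto each non-seed word as a running average."""
    score = tweet_score(tokens)
    for t in tokens:
        if t not in seed_lexicon:
            entry = learned[t]
            entry["count"] += 1
            entry["value"] += (score - entry["value"]) / entry["count"]

for tweet in ["lockdown news is terrible", "great progress on vaccines"]:
    update_lexicon(tweet.lower().split())
    print(tweet, "->", tweet_score(tweet.lower().split()))
```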

6.
IEEE Internet Things J ; 8(23): 16863-16871, 2021 Dec.
Article in English | MEDLINE | ID: covidwho-1526324

ABSTRACT

Human emotions are strongly coupled with the physical and mental health of any individual. While emotions exhibit complex physiological and biological phenomena, studies reveal that physiological signals can be used as an indirect measure of emotions. In unprecedented circumstances like the coronavirus (COVID-19) outbreak, a remote Internet of Things (IoT)-enabled solution coupled with AI can interpret and communicate emotions, serving substantially in healthcare and related fields. This work proposes an integrated IoT framework that enables wireless communication of physiological signals to a data-processing hub where long short-term memory (LSTM)-based emotion recognition is performed. The proposed framework offers real-time communication and recognition of emotions, enabling health monitoring and distance-learning support amid pandemics. The achieved results are very promising: the proposed IoT protocols (TS-MAC and R-MAC) achieve an ultralow latency of 1 ms, R-MAC offers improved reliability compared with the state of the art, and the proposed deep learning scheme achieves a high F-score of 95%. The achieved results in communications and AI match the interdependency requirements of deep learning and IoT frameworks, ensuring the suitability of the proposed work for distance learning, student engagement, healthcare, emotion support, and general wellbeing.
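A sketch of the recognition end of such a pipeline: fixed-length windows of multichannel physiological signals fed to an LSTM classifier. The window length, channel count, and number of emotion classes are assumptions, and the paper's TS-MAC/R-MAC transport layer is outside the scope of this snippet:

```python
# Windowed multichannel physiological signals into an LSTM emotion classifier.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 256    # samples per window (assumed)
CHANNELS = 4    # e.g., EDA, PPG, skin temperature, respiration (assumed)
EMOTIONS = 4    # number of target emotion classes (assumed)

def windows(signal: np.ndarray, step: int = 128) -> np.ndarray:
    """Slice a (samples, channels) stream into overlapping (WINDOW, channels) windows."""
    idx = range(0, signal.shape[0] - WINDOW + 1, step)
    return np.stack([signal[i : i + WINDOW] for i in idx])

def build_model() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(WINDOW, CHANNELS)),
        layers.LSTM(64),
        layers.Dense(EMOTIONS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```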

7.
Sensors (Basel) ; 21(16)2021 Aug 18.
Article in English | MEDLINE | ID: covidwho-1367892

ABSTRACT

With the advancement of human-computer interaction, robotics, and especially humanoid robots, there is an increasing trend toward human-to-human communication over online platforms (e.g., Zoom). This has become more significant in recent years due to the COVID-19 pandemic. The increased use of online platforms for communication signifies the need to build efficient and more interactive human emotion recognition systems. In a human emotion recognition system, the physiological signals of human beings are collected, analyzed, and processed with the help of dedicated learning techniques and algorithms. With the proliferation of emerging technologies, e.g., the Internet of Things (IoT), the future Internet, and artificial intelligence, there is a high demand for building scalable, robust, efficient, and trustworthy human emotion recognition systems. In this paper, we present the development and progress in sensors and technologies to detect human emotions. We review the state-of-the-art sensors used for human emotion recognition and different types of activity monitoring. We present the design challenges and provide practical references for such human emotion recognition systems in the real world. Finally, we discuss current trends in applications and explore future research directions to address issues such as scalability, security, trust, privacy, transparency, and decentralization.


Subject(s): Artificial Intelligence, COVID-19, Emotions, Humans, Pandemics, SARS-CoV-2